Abstract:
This paper studies how macroprudential policy tools applied to the housing market can complement interest rate-based monetary policy in achieving one additional stabilization objective, defined as keeping either economic activity or credit at some exogenous (and possibly time-varying) level. We show analytically in a canonical New Keynesian model with housing and collateral constraints that using the loan-to-value (LTV) ratio, a tax on credit, or a tax on property as an additional policy instrument does not resolve the inflation-output volatility tradeoff. Perfect targeting of inflation and credit with monetary and macroprudential policy is possible only if the role of housing debt in the economy is sufficiently small. The identified limits of the considered policies stem from their predominantly intertemporal impact on the decisions of financially constrained agents, which makes them poor complements to monetary policy, which also operates at an intertemporal margin. These limits can be overcome if macroprudential policy is instead designed so that it sufficiently redistributes income between savers and borrowers.
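For concreteness, the collateral constraints referred to here are typically of the loan-to-value type; a generic Iacoviello-style formulation (the notation is illustrative and not necessarily the paper's own) is:

```latex
% Borrowing of the constrained household is capped at a fraction m (the LTV
% ratio, here the macroprudential instrument) of the expected discounted
% value of its housing collateral:
%   b_t : real debt,  q_t : real house price,  h_t : housing stock,
%   R_t : nominal interest rate,  \pi_{t+1} : gross inflation.
b_t \le m \, \mathbb{E}_t\!\left[ \frac{q_{t+1}\, h_t\, \pi_{t+1}}{R_t} \right]
```

Tightening m changes how much the borrower can shift spending across time, which is the intertemporal margin the abstract identifies as overlapping with that of monetary policy.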
Abstract:
This study examines whether the difference between firms' actual and expected target leverage ratios affects the cross-sectional risk-return trade-off in the stock market. It uses financial data on all A-share firms listed and traded on the Shanghai and Shenzhen stock exchanges from 2000 to 2018, drawn from the CSMAR database. We find that the risk-return relationship is negative when actual leverage is below target leverage and, conversely, positive when actual leverage is above target leverage. This contradicts the basic principle of traditional finance that greater risk commands a higher return.
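The paper's sorting-and-regression logic can be illustrated on a toy cross-section. The data below are synthetic and deliberately constructed so that the risk-return slope flips sign with the leverage gap, mimicking the paper's finding; this is a sketch of the grouping procedure, not the paper's estimation.

```python
import numpy as np

# Synthetic firms: an invented data-generating process in which the
# risk-return slope flips sign with the deviation from target leverage.
rng = np.random.default_rng(0)
n = 2000
risk = rng.uniform(0.1, 0.5, n)                  # volatility proxy
target_lev = 0.4 + 0.1 * rng.standard_normal(n)  # expected target leverage
actual_lev = target_lev + 0.15 * rng.standard_normal(n)
gap = actual_lev - target_lev                    # deviation from target

ret = np.where(gap > 0, 0.02 + 0.10 * risk, 0.02 - 0.10 * risk)
ret = ret + 0.01 * rng.standard_normal(n)        # idiosyncratic noise

def slope(x, y):
    """OLS slope of y on x (computed on demeaned data)."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

over = gap > 0  # over-levered firms
print("risk-return slope, over-levered firms: ", slope(risk[over], ret[over]))
print("risk-return slope, under-levered firms:", slope(risk[~over], ret[~over]))
```

Splitting the cross-section on the sign of the leverage gap before estimating the risk-return slope is the key step; pooling the two groups would average away the opposing slopes.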
Abstract:
Efficient, dedicated hardware architectures and accelerator micro-engines are crucial implementation forms of MPEG-like video coders. It is worthwhile to extract and generalize the common technologies and design philosophy behind the many hardwired MPEG-like coder architectures from academia and industry. This paper presents a systematic survey of the algorithms and architectures of hardwired MPEG-like coders, from both microscopic and macroscopic perspectives, taking H.264/AVC as the analysis target. Recent advances in hardware architectures of prevailing H.264/AVC coders are reviewed and summarized. Furthermore, important algorithm modules, such as integer- and fractional-pixel motion estimation, mode decision, motion vector prediction, intra prediction, rate control, the CABAC coder, and the deblocking filter, are reviewed with detailed analysis of their algorithms and hardware architectures. In accordance with the intrinsic characteristics of the algorithm flows, the major design constraints and considerations of algorithm and architecture are analyzed respectively. The common technologies of prevailing architectures are summarized from a systematic perspective, covering different levels ranging from algorithm and architecture to control and data flows. Based on these analyses, the survey further offers in-depth summarization and perspectives on MPEG-like coder architecture design. First, the design challenges of optimizing multiple target performance metrics are analyzed, and possible solutions are systematically summarized. Second, rate-distortion-complexity constrained algorithm optimization for MPEG-like video encoders is discussed. Third, a typical four-level hierarchical architecture model (SoC system, module, interconnection, memory) is analyzed, with emphasis on the pivotal memory and interconnection architectures. Moreover, algorithm and architecture design suggestions and preferences for the vital modules are discussed. Fourth, the composite performance of prevailing architectures is evaluated, taking hardware logic cost, SRAM size, external memory bandwidth, throughput efficiency, power dissipation, and rate-distortion performance as comparison factors. Finally, the paper provides explicit perspectives on future trends in video coder architecture design. This survey can serve as a design reference for H.264/AVC coder hardware architectures and offers further insight into algorithm and architecture optimization for the emerging HEVC standard.
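To give a flavor of the motion-estimation module discussed above, here is a minimal full-search block matcher with the sum-of-absolute-differences (SAD) cost, the metric that hardwired encoders typically accelerate. This is a pure-software sketch for illustration, not a model of any surveyed architecture; the frame sizes and search range are invented.

```python
import numpy as np

def full_search_me(ref, cur, bx, by, bsize=4, srange=2):
    """Exhaustive block matching: return the motion vector (dx, dy)
    minimizing the SAD cost over a +/- srange search window."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int64)
    best_mv, best_sad = (0, 0), None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

# Toy frames: the current block equals the reference content one pixel
# to the right, so the best match should be mv = (1, 0) with SAD 0.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (16, 16))
cur = np.zeros_like(ref)
cur[4:8, 4:8] = ref[4:8, 5:9]
mv, sad = full_search_me(ref, cur, 4, 4)
print(mv, sad)
```

The two nested candidate loops are exactly what hardware designs parallelize: the survey's systolic-array and memory-reuse techniques amortize the candidate fetches that dominate this loop in software.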
Abstract:
This article assesses whether the adoption of inflation targeting (IT) helps reduce the output-inflation tradeoff. We address the self-selection problem of IT adoption with endogenous switching regressions and show that output-inflation tradeoffs are significantly lower in IT countries, not only over the whole sample but also within the developed- and developing-country subsamples. In addition, we find strong evidence supporting the positive selection hypothesis: countries with a higher probability of adopting IT are exactly those that benefit more (a larger reduction in the output-inflation tradeoff) from its implementation. Additional results reveal that economies with higher trade openness, lower financial openness, and a less flexible exchange rate regime are associated with a larger tradeoff between output and inflation, but the effects are statistically significant only in the IT regime.
Abstract:
Tracking mobile targets with low-cost wireless sensor networks (WSNs) requires not only good tracking accuracy but also network longevity. Cluster-based tracking protocols leverage the fact that only sensors in the vicinity of the target can contribute to its detection, while the other sensors should sleep to save energy; this provides a good tradeoff between energy efficiency and tracking accuracy. However, because of the complexity of cluster-based tracking protocols, quantifying this tradeoff is challenging. In this paper, a convolution-based method is presented to quantify the relationship between the cluster parameters and the energy-quality metrics of the tracking system, yielding Pareto-optimal parameters that jointly optimise the energy efficiency and tracking accuracy of a cluster-based WSN tracking system. The results are verified in popular cluster-based tracking protocols via extensive simulations, which demonstrate the effectiveness of the optimisation framework.
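The Pareto-optimality idea behind the parameter choice can be sketched with a toy energy/accuracy model. The functional forms below are invented for the demo (they are not the paper's convolution-based derivation): a larger wake-up radius r activates more sensors (energy grows like r², plus a 4/r re-clustering overhead at small radii) but reduces the chance of losing the target (error decays like exp(-r)).

```python
import numpy as np

# Candidate cluster radii and their (invented) per-step costs.
radii = np.linspace(0.5, 5.0, 10)
energy = radii ** 2 + 4.0 / radii   # energy cost: active sensors + overhead
error = np.exp(-radii)              # tracking-error proxy

def pareto_front(costs):
    """Indices of points not dominated when minimizing both costs."""
    front = []
    for i, c in enumerate(costs):
        dominated = any(
            d[0] <= c[0] and d[1] <= c[1] and d != c
            for d in costs
        )
        if not dominated:
            front.append(i)
    return front

costs = list(zip(energy, error))
front = pareto_front(costs)
print("Pareto-optimal radii:", radii[front])
```

The smallest radii are dominated here (they cost more energy *and* track worse, because of the overhead term), so the front excludes them; a designer would then pick a point on the remaining front according to the desired energy-accuracy balance.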
Abstract:
Physiologists often assume that mitochondria are the main producers of reactive oxygen species (ROS) in cells. Consequently, in biomedicine, mitochondria are considered important targets for therapeutic treatment, and in evolutionary biology they are considered mediators of life-history tradeoffs. Surprisingly, data supporting this assumption are lacking, at least partly because of the technical difficulty of accurately measuring the ROS produced by different subcellular compartments in intact cells. In this Commentary, we first review three potential reasons underlying the misattribution of mitochondrial dominance in cellular ROS production. We then introduce other major sites and enzymes responsible for cellular ROS production. Using a recently developed cell-based assay, we further discuss the contribution of mitochondria to the total rate of ROS release in cell lines and primary cells of different species. In these cells, the contribution of mitochondria varies between cell types, but mitochondria are never the main source of cellular ROS. This indicates that although mitochondria are a significant source of cellular ROS, they are not necessarily the main contributor under normal conditions. Intriguingly, similar findings were observed in cells under a variety of stressors, life-history strategies, and pathological stages in which the rates of cellular ROS production were significantly enhanced. Finally, we make recommendations for designing future studies. We hope this paper will encourage investigators to carefully consider non-mitochondrial sources of cellular ROS in their study systems or models.
Abstract:
Reconfiguration can be used to improve system performance, increase fault tolerance, recover from failures, and prevent and recover from cyberattacks. The process of using reconfiguration to increase a system's resilience to cyberattacks is called Moving Target Defense (MTD). While system reconfiguration has many advantages, it brings potential disadvantages, such as performance and availability degradation while a system is undergoing reconfiguration. This paper analyzes the use of MTD to reduce the frequency of cyberattacks and develops closed-form analytic models that capture the tradeoffs between resilience to cyberattacks and system performance. The state of the art in MTD discusses how proposed methods operate and uses experimentation to demonstrate their use; this paper fills a research gap by developing formal models that analyze the tradeoffs between performance and resilience to cyberattacks. More specifically, its main contributions are: (1) a closed-form analytic model for the probability that an attacker completes the reconnaissance phase of an attack, and for the task's execution time, when regular reconfigurations are used; (2) an optimization model for determining the optimal task reconfiguration frequency in a way that accounts for given performance-security tradeoffs; (3) a closed-form analytic model for the same two quantities when irregular reconfigurations are used; and (4) a derivation of the maximum probability that an attacker fails to complete the reconnaissance phase under irregular reconfiguration intervals.
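The performance-security tradeoff in contribution (2) can be illustrated with a deliberately simplified closed-form sketch, invented for this demo (it is not the paper's exact model): reconnaissance time is Exponential(lam), each reconfiguration every delta seconds resets attacker progress, so by memorylessness the per-interval success probability is p = 1 - exp(-lam * delta), while each reconfiguration event also adds a fixed overhead to the protected task.

```python
import math

def attack_success_prob(lam, delta):
    """P(reconnaissance completes within one reconfiguration interval),
    with Exponential(lam) recon time reset at every reconfiguration."""
    return 1.0 - math.exp(-lam * delta)

def expected_task_time(work, delta, overhead):
    """A task needing `work` seconds pays `overhead` seconds for each
    reconfiguration event that occurs while it runs."""
    return work + overhead * math.floor(work / delta)

for delta in (5.0, 10.0, 30.0):
    p = attack_success_prob(lam=0.05, delta=delta)
    t = expected_task_time(work=60.0, delta=delta, overhead=1.0)
    print(f"delta={delta:5.1f}s  P(recon in one interval)={p:.3f}  task time={t:.1f}s")
```

Reconfiguring more often (smaller delta) lowers the attacker's per-interval success probability but inflates the task's execution time; choosing delta to balance the two is exactly what an optimization model of this kind resolves.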
Abstract:
Background: Conventional phase I algorithms that find a phase-2 recommended dose (P2RD) based on toxicity alone are problematic because the maximum tolerated dose (MTD) is not necessarily the optimal dose with the most desirable risk-benefit trade-off. Moreover, the increasingly common practice of treating an expansion cohort at a chosen MTD has undesirable consequences that may not be obvious. Patients and methods: We review the phase I-II paradigm and the EffTox design, which uses both efficacy and toxicity to choose optimal doses for successive patient cohorts and to find the optimal P2RD. We conduct a computer simulation study to compare the performance of the EffTox design with the traditional 3+3 design and the continual reassessment method. Results: By accounting for the risk-benefit trade-off, the EffTox phase I-II design overcomes the limitations of conventional toxicity-based phase I designs. Numerical simulations show that the EffTox design has a higher probability of identifying the optimal dose and treats more patients at that dose. Conclusions: Phase I-II designs such as EffTox provide a coherent and efficient approach to finding the optimal P2RD by explicitly accounting for the risk-benefit trade-offs underlying medical decisions.
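For readers unfamiliar with the comparator, the classic 3+3 escalation rule can be written in a few lines (EffTox itself requires a Bayesian efficacy-toxicity model and is not reproduced here; this sketch covers only the toxicity-based rule the paper argues against).

```python
# One decision step of the standard 3+3 dose-escalation rule.
def three_plus_three(dlt_counts):
    """Next action at the current dose, given dose-limiting toxicity
    (DLT) counts per 3-patient cohort already treated at that dose."""
    total = sum(dlt_counts)
    if len(dlt_counts) == 1:          # first cohort of 3
        if total == 0:
            return "escalate"
        if total == 1:
            return "expand"           # enroll 3 more at the same dose
        return "de-escalate"          # 2 or 3 DLTs out of 3
    # after two cohorts (6 patients) at this dose
    return "escalate" if total <= 1 else "de-escalate"

print(three_plus_three([0]))       # 0/3 DLTs -> escalate
print(three_plus_three([1]))       # 1/3 -> expand
print(three_plus_three([1, 0]))    # 1/6 -> escalate
print(three_plus_three([1, 1]))    # 2/6 -> de-escalate
```

Note that efficacy never enters these decisions, which is precisely the limitation that motivates phase I-II designs like EffTox.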
Abstract:
Purpose: The purpose of this paper is to test the validity of dynamic tradeoff theory and to argue that the speed of adjustment toward the target capital structure may vary depending primarily on certain inherent firm characteristics. Design/methodology/approach: The objective is to study the impact of corporate governance arrangements on the capital structure behavior of listed French firms. The author measures corporate governance arrangements in three different ways to capture their influence on capital structure and analyzes how governance affects a firm's rebalancing behavior in the presence of relevant control variables. Assuming that the costs of deviating from the target leverage are positively correlated with the duration of the deviation, the author finds that firms with a strong governance system adjust at a faster rate, because the longer the deviation lasts, the greater the loss in firm value. In addition, firms with more efficient governance structures face lower adjustment costs. Findings: The author measures corporate governance quality using several proxies. The results make a major contribution to the literature and show that the quality of the governance system is an important factor in helping a company reach its target leverage quickly. Further support for this finding comes from showing that the two extreme leverage-deviation groups are dominated by firms with weak governance, and that the rebalancing speed is faster for firms with strong governance systems. Originality/value: The paper proposes that a firm with a strong governance system will display a shorter-lived deviation from the target capital structure and a higher adjustment speed than a firm with weak governance; in other words, both the deviation from the target capital structure and the adjustment speed are related to the quality of corporate governance. The results indicate that firms with a stronger governance structure are characterized by shorter-term deviations from the target, and that firms in the two subsamples where the leverage deviation is extremely high or low are characterized by weak governance. These results corroborate the hypothesis on the speed of adjustment toward the desired target leverage. Furthermore, the author shows empirically that the adjustment speed of firms with stronger governance is higher in both extreme leverage situations. This paper extends the existing literature on capital structure adjustment by introducing the effect of corporate governance.
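The "speed of adjustment" in dynamic tradeoff tests is conventionally the lambda in a partial-adjustment model, Lev[t] - Lev[t-1] = lambda * (Lev* - Lev[t-1]) + eps[t], with lambda near 1 meaning fast rebalancing (the strong-governance case in the paper's story). The sketch below simulates such a process with invented numbers and recovers lambda by OLS; it illustrates the specification, not the paper's panel estimation.

```python
import numpy as np

# Simulate a partial-adjustment leverage path and re-estimate lambda.
rng = np.random.default_rng(0)
T, lam_true, target = 200, 0.4, 0.5
lev = np.empty(T)
lev[0] = 0.1  # firm starts far below its target leverage
for t in range(1, T):
    lev[t] = lev[t - 1] + lam_true * (target - lev[t - 1]) \
             + 0.01 * rng.standard_normal()

# OLS of the leverage change on the lagged deviation (no intercept).
dev = target - lev[:-1]
dlev = np.diff(lev)
lam_hat = float(dev @ dlev / (dev @ dev))
print(f"true lambda = {lam_true}, estimated = {lam_hat:.3f}")
```

In the paper's setting one would estimate lambda separately for strong- and weak-governance subsamples and compare the two speeds.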
Abstract:
The dividing wall distillation column has a larger number of design variables than a conventional column. For the design of the column, it is desirable to define a priori the feasible space over which all designs lie. This paper addresses that problem through a graphical representation of all possible dividing wall column (DWC) designs for a specified separation of a ternary feed. The development of the theory is based on splitting the dividing wall column into three simple columns (a prefractionator and two downstream columns) and applying the shortcut methods of Fenske, Underwood, Gilliland, and Kirkbride. For specified terminal product compositions, it is shown that the design space can be constructed on a three-dimensional plot, the axes being the flow rates of two of the components in the 'net distillate' from the prefractionator (the dividing wall column being representable as a Petlyuk system) and the effective reflux ratio of the prefractionator. For ease of graphical representation, the designs are projected onto a two-dimensional space of prefractionator output flow rate variables for a fixed prefractionator reflux ratio. Constraints related to the availability of feed components to the downstream columns, infeasible reflux ratios, and imbalance in plate assignment on either side of the wall are also placed on the three-dimensional design space to generate a feasible design space. On this design space, enveloped by the various constraints, equi-parameter curves are drawn depicting the locus of points on which a chosen parameter has a constant value. The parameter can be the total number of column plates, the number of plates above/below the dividing wall, the reboiler duty, or the cost. The design space proposed in this paper, even though it uses shortcut methods, provides the designer with a broad view of all available designs, out of which attractive options may be explored further. The location of equi-cost or equi-energy curves assists the designer in identifying design changes that could lead to either decreased cost or decreased energy.
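Of the shortcut methods named above, the Fenske equation is the simplest to state: it gives the minimum number of theoretical stages at total reflux from the light-key/heavy-key splits and the relative volatility. The compositions and volatility below are invented for the demo.

```python
import math

# Fenske equation for minimum stages at total reflux:
#   N_min = ln[(xD_lk / xD_hk) * (xB_hk / xB_lk)] / ln(alpha)
# lk/hk = light/heavy key; D = distillate, B = bottoms; alpha = relative
# volatility of the light key with respect to the heavy key.
def fenske_nmin(xd_lk, xd_hk, xb_lk, xb_hk, alpha):
    return math.log((xd_lk / xd_hk) * (xb_hk / xb_lk)) / math.log(alpha)

# Illustrative separation: a 95/5 light/heavy split in the distillate,
# 5/95 in the bottoms, relative volatility 2.5.
n_min = fenske_nmin(0.95, 0.05, 0.05, 0.95, 2.5)
print(f"minimum stages: {n_min:.2f}")   # ~6.43
```

Applying such relations to the prefractionator and the two downstream columns, and sweeping the free flow-rate and reflux variables, is what generates the equi-parameter curves on the design space described above.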